The science fiction author Isaac Asimov developed three 'laws of robotics', which have been influential in subsequent discussions of abstract AI ethics. Asimov explores the laws in the collection "I, Robot", in particular how these simple laws have unexpected consequences. The laws are as follows, with earlier laws taking precedence over later ones:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Used on Chap. 15: page 344; Chap. 23: page 563
Also known as: laws of robotics, three laws of robotics
Links:
Wikipedia: Three Laws of Robotics
Wikipedia: I, Robot (the book)
IMDb: I, Robot (the film)